AI – doing new things that used to be impossible
10 Nov 2025
Interview with AI researcher Björn Ommer about the potential of artificial intelligence.
Björn Ommer is a full professor at LMU, where he heads the Computer Vision & Learning research group. In addition, he is Chief AI Officer at LMU and co-chairperson of the Bavarian AI Council. In our interview, the LMU computer scientist discusses the latest developments in artificial intelligence and the further possibilities this new technology will open up for society and business. As part of the AI symposium at LMU on 10 November, Björn Ommer will be giving a lecture on “The Transformative Power of AI: From Information to Knowledge Society.”
Many people are using AI applications like ChatGPT. Does this show that the possibilities opened up by AI are being embraced?
Björn Ommer: AI is a whole lot more than large language models. But if we take ChatGPT as an example: Germans are among the most active users. I think the acceptance is there, and we can see the demand. At the same time, many people have questions about what AI is doing to us and how it will change their jobs.
Would you say people are sufficiently clued in when it comes to using AI?
We can see two camps here, which are drifting apart. There are people, especially in the business world, who are using AI in highly informed ways and genuinely creating added value. By contrast, there are other users who certainly have some catching up to do. But, and I’d really like to emphasize this: we’re dealing with a technology that is barely three years old. We’re looking over the shoulders of researchers, as it were, and using their findings while they’re still being developed. If we compare this to how long it took before the internet or cellphones were widely adopted in society, we see that it’s happening much faster with AI.
Of course, the technology needs further improvement, and we need to understand in particular what we can use it for. I think there’s significant potential for added value here. The first step is always that a new technology is used to optimize existing functionalities or as a more efficient replacement for them. But the real potential of this technology is unlocked when we can do new things that used to be impossible. Too many companies are currently fixated on pure rationalization through AI. The real prize is to create brand new things. And this is part of the reason why we’ve integrated AI very deeply into research and teaching here at LMU and want to train a new generation that carries this knowledge into society.
You yourself engage in interdisciplinary work. How important is cooperation across expertise boundaries for AI research?
Artificial intelligence is interdisciplinary by definition and cannot be reduced to a niche area. AI research, but also its application, benefits when as many different domains and modalities as possible are brought together. For example, there is no such thing as a purely language-based large model anymore, since the models are now trained on images and videos as well. Even just to improve the technology, therefore, input from various domains is needed. And when we further consider that the development of artificial intelligence has ethical, social, and other dimensions, there is no way around bringing different disciplines together.
As a comprehensive research-intensive university, LMU is genuinely in a great position when it comes to AI. You see, AI is not just pure engineering, but requires a holistic perspective where various domains come together and social questions play a role.
Language models like ChatGPT consume huge amounts of computing resources. In an interview on the LMU website two years ago, you cautioned that development was at a crossroads and that you were not going to take part in the scaling race. What has happened since, and where do things go from here?
We’re seeing precisely these dynamics unfold. Even for the large companies, development has become so expensive that they cannot just continue chucking more data into the models and making them even larger. Another problem is that genuinely new data are needed to make progress in development; more of the same will not help. And then there are the lawsuits from content providers who are angry that AI companies used their data without paying. This makes it clear that model training cannot continue indefinitely as before. We’re also seeing that real intelligence does not come about simply by scaling ever higher. The goal in the medium term will be for systems to learn more effectively with fewer computing resources – that is to say, with fewer but more carefully chosen data and by avoiding repetition.
What changes when AI needs so little computing capacity that it runs on consumer hardware?
When we developed the “Stable Diffusion” AI image generator, our goal was that, once the models had been trained, it would be possible to run them locally – even on smartphones. This reduces dependency on foreign cloud providers, enhances data protection, and strengthens our sovereignty. That is becoming increasingly important with the current trend toward AI applications. I see the large foundation models as the operating system for the AI of the future. But users do not work on their PCs via the command line; they use apps, without which they could not work easily and efficiently with their computers. Accordingly, we will not be interacting primarily with the foundation models in the future, but with AI apps derived from them. And then we will see much more targeted, more compact applications. If you want to quickly dictate something on your smartphone – in your email program, for instance – you do not need a model with many billions of parameters; something more compact and more tailored to the application is quite sufficient.
Does this also mean that the applications will become more individualized? If so, could this entail the risk of fragmenting society?
Running locally does not by itself mean that the AI you and I use is different. But it is certainly already possible to personalize language models: On ChatGPT, for example, you can choose what personality the model should have, whether friendly or businesslike. And if you use the memory function, the AI continuously learns to adapt to you and accommodate your preferences. But how far should the AI go? When should it contradict the user, or at least introduce other opinions? These are not new questions. Rather, it is becoming apparent that AI is a powerful technology which acts as a magnifying glass for problems that already exist, and points to housekeeping that we as a society have shirked to date – on social networks, for instance.
Björn Ommer is himself an AI researcher and co-developed the “Stable Diffusion” AI image generator. “It’s exciting to see how widely this technology is already in use, although naturally it’s not finished yet.” | © Ansgar Pudenz
You give many lectures and interviews. What is your impression: Do media and the general public have an accurate sense of AI and its significance?
No is the blunt answer. The vast majority of people have no idea how far-reaching this development really is. In the case of generative AI, many people think only of the creation of texts, images, or videos. But we will see many valuable use cases beyond the generation of content. For example, AIs can process and contextualize even unstructured, heterogeneous data. For academic research in particular, this will be a game changer.
I see artificial intelligence as an enabling technology that will help us transition from an information society to a knowledge society. I think knowledge is produced where things come together that do not sit on the same page of a book, but inhabit entirely different contexts and even come from fields that are far apart. And if AI helps this process, it will become a facilitator of further knowledge. We find ourselves in an information society in which we’re practically drowning in information. But what we’re lacking is targeted and individual access to the knowledge behind it. AI can help here – by creating emergence, so that connecting individual data points gives rise to something new and personalized, knowledge that can be acted upon, or in short: “actionable knowledge.” So it’s not just about bare facts, but about knowledge that is directly personalized for me as a user and informs my specific actions.
We cannot even imagine what it means to transform from a purely information-led society into a knowledge society, in which intelligence becomes a commodity that scales in both depth and breadth and becomes accessible and affordable for all.
An AI symposium is taking place at LMU on 10 November. Professor Björn Ommer will be giving the keynote lecture on “The Transformative Power of AI: From Information to Knowledge Society.” Among other events, the newly appointed AI professors will be introducing themselves, as will the “AI and Transfer/Innovation” task force. “I’m excited about the exchange of ideas and the fresh impetus from the various specialist fields,” says Ommer.